Robust Shift-and-Invert Preconditioning: Faster and More Sample Efficient Algorithms for Eigenvector Computation

Authors

  • Chi Jin
  • Sham M. Kakade
  • Cameron Musco
  • Praneeth Netrapalli
  • Aaron Sidford
Abstract

In this paper we provide faster algorithms and improved sample complexities for approximating the top eigenvector of a matrix A⊤A. In particular we give the following results for computing an approximate eigenvector, i.e. some x such that x⊤A⊤Ax ≥ (1 − ε)·λ₁(A⊤A):

• Offline Eigenvector Estimation: Given an explicit matrix A ∈ R^{n×d}, we show how to compute an approximate top eigenvector in time Õ([nnz(A) + d·sr(A)/gap²] · log(1/ε)) and Õ([nnz(A)^{3/4}·(d·sr(A))^{1/4}/√gap] · log(1/ε)). Here sr(A) is the stable rank, gap is the multiplicative gap between the largest and second largest eigenvalues, and Õ(·) hides log factors in d and gap. By separating the gap dependence from nnz(A), our first runtime improves on classic iterative algorithms such as the power and Lanczos methods. It also improves on previous work separating the nnz(A) and gap terms using fast subspace embeddings [AC09, CW13] and stochastic optimization [Sha15c]: we obtain significantly improved dependencies on sr(A) and ε, and our second running time improves this further when nnz(A) ≤ d·sr(A)/gap².

• Online Eigenvector Estimation: Given a distribution D over vectors a ∈ R^d with covariance matrix Σ and a vector x₀ which is an O(gap) approximate top eigenvector for Σ, we show how to compute an approximate eigenvector using Õ(v(D)/gap² + v(D)/(gap·ε)) samples from D. Here v(D) is a natural notion of the variance of D. Combining our algorithm with a number of existing algorithms to initialize x₀, we obtain improved sample complexity and runtime results under a variety of assumptions on D. Notably, we show that, for general distributions, our sample complexity result is asymptotically optimal: we achieve optimal accuracy as a function of sample size as the number of samples grows large.

We achieve our results using a general framework that we believe is of independent interest. We provide a robust analysis of the classic method of shift-and-invert preconditioning, which reduces eigenvector computation to approximately solving a sequence of linear systems. We then apply variants of stochastic variance reduced gradient descent (SVRG) and additional recent advances in solving linear systems to achieve our claims. We believe our results suggest the generality and effectiveness of shift-and-invert based approaches and imply that further computational improvements may be reaped in practice.
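To make the framework concrete, the sketch below (plain numpy) shows the two pieces the abstract describes: an outer power iteration applied to B⁻¹ with B = λ̄I − A⊤A, and an SVRG-style solver for the linear systems that replace each exact inversion. Everything here, the function names, the fixed shift lam_bar supplied by the caller, the step size, and the iteration counts, is an illustrative assumption rather than the paper's algorithm, which also estimates the shift and couples the solver accuracy to the outer loop.

```python
import numpy as np

def svrg_solve(A, lam_bar, b, x0, n_epochs=20, inner_steps=None, eta=None):
    """Approximately solve (lam_bar*I - A^T A) x = b with SVRG.

    Views f(x) = 0.5 x^T (lam_bar*I - A^T A) x - b^T x as the average of
    f_i(x) = 0.5*lam_bar*||x||^2 - (n/2)(a_i^T x)^2 - b^T x over rows a_i.
    Requires lam_bar > lambda_1(A^T A) so the system is positive definite.
    """
    n, d = A.shape
    inner_steps = inner_steps or 2 * n
    if eta is None:
        # Crude step size from a smoothness bound on the f_i;
        # a real implementation would tune this.
        L = lam_bar + n * np.max(np.sum(A ** 2, axis=1))
        eta = 0.1 / L
    x = x0.copy()
    for _ in range(n_epochs):
        x_snap = x.copy()
        # Full gradient at the snapshot: (lam_bar*I - A^T A) x_snap - b.
        mu = lam_bar * x_snap - A.T @ (A @ x_snap) - b
        for _ in range(inner_steps):
            i = np.random.randint(n)
            a = A[i]
            diff = x - x_snap
            # Variance-reduced gradient: grad f_i(x) - grad f_i(x_snap) + mu.
            g = lam_bar * diff - n * (a @ diff) * a + mu
            x = x - eta * g
    return x

def shift_invert_top_eig(A, lam_bar, n_outer=30):
    """Power iteration on B^{-1} with B = lam_bar*I - A^T A,
    where each application of B^{-1} is an approximate linear solve."""
    d = A.shape[1]
    x = np.random.randn(d)
    x /= np.linalg.norm(x)
    for _ in range(n_outer):
        x = svrg_solve(A, lam_bar, b=x, x0=x)
        x /= np.linalg.norm(x)
    return x
```

The reason the shift helps: if λ̄ ≈ (1 + c·gap)·λ₁, the eigenvalues of B⁻¹ are 1/(λ̄ − λᵢ), so its top two eigenvalues are separated by a constant multiplicative factor. A logarithmic number of outer iterations then suffices, and all of the gap dependence moves into the conditioning of the linear systems.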

Similar Resources

Faster Eigenvector Computation via Shift-and-Invert Preconditioning

We give faster algorithms and improved sample complexities for the fundamental problem of estimating the top eigenvector. Given an explicit matrix A ∈ R^{n×d}, we show how to compute an approximate top eigenvector of A⊤A in time Õ([nnz(A) + d·sr(A)/gap²] · log(1/ε)). Here nnz(A) is the number of nonzeros in A, sr(A) is the stable rank, and gap is the relative eigengap. We also consider an onlin...


Efficient coordinate-wise leading eigenvector computation

We develop and analyze efficient "coordinate-wise" methods for finding the leading eigenvector, where each step involves only a vector-vector product. We establish global convergence with overall runtime guarantees that are at least as good as the Lanczos method's and dominate it for slowly decaying spectra. Our methods are based on combining a shift-and-invert approach with coordinate-wise algorit...
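One way to read that blurb in code: run coordinate descent on the shifted system (λ̄I − A⊤A)x = b, maintaining z = Ax so each step costs only a couple of vector-vector products. This minimal numpy sketch is an assumed illustration of the flavor of such methods, not the cited paper's algorithm (the function name and step count are hypothetical):

```python
import numpy as np

def coord_descent_shifted_solve(A, lam_bar, b, x0, n_steps=10000):
    """Exact coordinate minimization on (lam_bar*I - A^T A) x = b.

    Requires lam_bar > lambda_1(A^T A) so every diagonal entry
    B_jj = lam_bar - ||A[:, j]||^2 is positive.
    """
    n, d = A.shape
    x = x0.copy()
    z = A @ x                        # running product z = A x
    col_sq = (A ** 2).sum(axis=0)    # ||A[:, j]||^2 for each column j
    for _ in range(n_steps):
        j = np.random.randint(d)
        # j-th entry of the residual B x - b, with B = lam_bar*I - A^T A.
        r_j = lam_bar * x[j] - A[:, j] @ z - b[j]
        delta = -r_j / (lam_bar - col_sq[j])
        x[j] += delta
        z += delta * A[:, j]         # keep z = A x in sync
    return x
```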


On Large-Scale Diagonalization Techniques for the Anderson Model of Localization

We propose efficient preconditioning algorithms for an eigenvalue problem arising in quantum physics, namely the computation of a few interior eigenvalues and their associated eigenvectors for the large, sparse, real, and symmetric indefinite matrices of the Anderson model of localization. We compare the Lanczos algorithm in the 1987 implementation by Cullum and Willoughby with the shift-and-inv...


Two Iterative Algorithms for Computing the Singular Value Decomposition from Input/Output Samples

The Singular Value Decomposition (SVD) is an important tool for linear algebra and can be used to invert or approximate matrices. Although many authors use "SVD" synonymously with "Eigenvector Decomposition" or "Principal Components Transform", it is important to realize that these other methods apply only to symmetric matrices, while the SVD can be applied to arbitrary nonsquare matrices. This...
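The distinction the blurb draws is easy to check numerically. A small numpy illustration (an assumed setup, not from the cited paper): for a nonsquare A the SVD exists directly, while eigendecomposition applies to the symmetric matrix A⊤A, whose eigenvalues are A's squared singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))      # nonsquare: SVD applies, plain eig does not

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Eigendecomposition enters via the symmetric matrix A^T A: its
# eigenvalues are the squared singular values of A, and its
# eigenvectors are A's right singular vectors.
evals, evecs = np.linalg.eigh(A.T @ A)

print(np.allclose(np.sort(s ** 2), np.sort(evals)))   # True
```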


Efficient and Robust Parameter Tuning for Heuristic Algorithms

The main advantage of heuristic or metaheuristic algorithms over exact optimization methods is their ability to handle large-scale instances within a reasonable time, albeit at the expense of losing a guarantee of achieving the optimal solution. Therefore, metaheuristic techniques are appropriate choices for solving NP-hard problems to near optimality. Since the parameters of heuristi...



Journal:
  • CoRR

Volume: abs/1510.08896

Publication date: 2015